13 research outputs found

    AI-Supported Assessment of Load Safety

    Full text link
    Load safety assessment and compliance is an essential step in the corporate process of every logistics service provider. In 2020, a total of 11,371 police checks of trucks were carried out, during which violations of the load safety regulations were detected in 9.6% (1,091) of cases. For a logistics service provider, every load safety violation results in high fines and damage to reputation. An assessment of load safety supported by artificial intelligence (AI) can reduce the risk of accidents caused by unsecured loads as well as the risk of fines during safety inspections. This work shows how photos of the load, taken by the truck driver or the loadmaster after the loading process, can be used to assess load safety. A trained two-stage artificial neural network (ANN) classifies these photos into three classes: (I) cargo loaded safely, (II) cargo loaded unsafely, and (III) unusable image. By applying several convolutional neural network (CNN) architectures, we show that it is possible to distinguish between usable and unusable images for cargo safety assessment. This distinction is crucial, since the truck driver and the loadmaster sometimes provide photos that lack essential image features, such as the case structure of the truck and the whole cargo. A human operator or another ANN then assesses the load safety in the second stage. (Comment: 9 pages, 4 figures, 2 tables)
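
    The trained model itself is not reproduced in the abstract; the following is a minimal sketch of the two-stage decision flow it describes, with placeholder stand-ins (`stage1_usable`, `stage2_safe`, and the dictionary-based `image`) instead of trained CNNs:

    ```python
    from enum import Enum

    class LoadClass(Enum):
        SAFE = "cargo loaded safely"
        UNSAFE = "cargo loaded unsafely"
        UNUSABLE = "unusable image"

    def stage1_usable(image: dict) -> bool:
        """Placeholder for the first-stage CNN: checks whether the photo
        shows the essential features (case structure, whole cargo)."""
        return image.get("shows_case_structure", False) and image.get("shows_whole_cargo", False)

    def stage2_safe(image: dict) -> bool:
        """Placeholder for the second stage (human operator or ANN):
        judges whether the visible cargo is secured."""
        return image.get("cargo_secured", False)

    def classify(image: dict) -> LoadClass:
        # Stage 1: filter out photos that cannot be assessed at all.
        if not stage1_usable(image):
            return LoadClass.UNUSABLE
        # Stage 2: assess load safety on usable photos only.
        return LoadClass.SAFE if stage2_safe(image) else LoadClass.UNSAFE
    ```

    The point of the staged design is that the cheap usability filter runs first, so the costlier safety assessment (machine or human) only ever sees images it can actually judge.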

    Visual Analytics of Gaze Data with Standard Multimedia Player

    Get PDF
    With the increasing number of studies in which participants' eye movements are tracked while watching videos, the volume of gaze data records is growing tremendously. Unfortunately, in most cases such data are collected in separate files in custom-made or proprietary data formats. These data are difficult to access even for experts and effectively inaccessible for non-experts, and expensive or custom-made software is normally necessary for their analysis. We address this problem by using existing multimedia container formats for distributing and archiving eye-tracking and gaze data bundled with the stimulus data. We define an exchange format that can be interpreted by standard multimedia players and can be streamed via the Internet. We convert several gaze data sets into our format, demonstrating the feasibility of our approach and making it possible to visualize these data with standard multimedia players. We also introduce two VLC player add-ons that allow for further visual analytics. We discuss the benefits of gaze data in a multimedia container and explain possible visual analytics approaches based on our implementations, the converted datasets, and initial user interviews.
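
    The exact exchange format is not specified in the abstract; as an illustration of the general idea of making gaze data readable by a standard player, the sketch below (with an assumed sample layout of `(start_ms, end_ms, x, y)` fixations) renders gaze data as a SubRip (SRT) subtitle track, which players such as VLC can overlay on the stimulus video:

    ```python
    def ms_to_srt_time(ms: int) -> str:
        """Format a millisecond timestamp as an SRT time code (HH:MM:SS,mmm)."""
        h, rem = divmod(ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1_000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

    def gaze_to_srt(fixations) -> str:
        """Render gaze fixations, given as (start_ms, end_ms, x, y) tuples,
        as SRT subtitle blocks that standard players can display in sync
        with the video."""
        blocks = []
        for i, (start, end, x, y) in enumerate(fixations, 1):
            blocks.append(f"{i}\n{ms_to_srt_time(start)} --> {ms_to_srt_time(end)}\n"
                          f"gaze: ({x}, {y})\n")
        return "\n".join(blocks)
    ```

    A real multimedia-container bundling would mux such a track alongside the video stream; SRT merely stands in here as the simplest player-readable time-aligned format.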

    Reducing the Pill Burden: Immunosuppressant Adherence and Safety after Conversion from a Twice-Daily (IR-Tac) to a Novel Once-Daily (LCP-Tac) Tacrolimus Formulation in 161 Liver Transplant Patients

    Get PDF
    Non-adherence to immunosuppressant therapy reduces long-term graft and patient survival after solid organ transplantation. The objective of this 24-month prospective study was to determine adherence, efficacy, and safety after conversion of stable liver transplant (LT) recipients from a standard twice-daily immediate-release Tacrolimus (IR-Tac) to a novel once-daily life-cycle-pharma Tacrolimus (LCP-Tac) formulation. We converted a total of 161 LT patients at baseline, collecting Tacrolimus trough levels, laboratory values, physical examination data, and the BAASIS© questionnaire for self-reported adherence to immunosuppression at regular intervals. With 134 participants completing the study period (17% dropouts), overall self-reported adherence according to the BAASIS© increased by 57% at month 24 compared to baseline (51% vs. 80%). Patients who required only a morning dose of their concomitant medications reported the largest improvement in adherence after conversion. The intra-patient variability (IPV) of consecutive Tacrolimus trough levels after conversion did not change significantly compared to pre-conversion levels. Despite reducing the daily dose by 30% at baseline, as recommended by the manufacturer, Tacrolimus trough levels remained stable, reflected by an increase in the concentration-dose (C/D) ratio. No episodes of graft rejection or loss occurred. Our data suggest that the use of LCP-Tac in liver transplant patients is safe and can increase adherence to immunosuppression compared to conventional IR-Tac.
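
    For readers unfamiliar with the two pharmacokinetic measures mentioned, a small sketch may help; the C/D-ratio definition (trough level divided by total daily dose) is standard, while computing IPV as a coefficient of variation is one common convention and not necessarily the exact formula used in the study:

    ```python
    from statistics import mean, stdev

    def cd_ratio(trough_ng_ml: float, daily_dose_mg: float) -> float:
        """Concentration-dose (C/D) ratio: trough level divided by the
        total daily dose."""
        return trough_ng_ml / daily_dose_mg

    def ipv_cv_percent(trough_levels) -> float:
        """Intra-patient variability (IPV) of consecutive trough levels,
        expressed here as the coefficient of variation in percent."""
        return stdev(trough_levels) / mean(trough_levels) * 100
    ```

    For example, keeping a trough of 6 ng/mL while the daily dose drops from 4 mg to 2.8 mg (a 30% reduction) raises the C/D ratio from 1.5 to about 2.14, matching the reported pattern of stable troughs at a lower dose.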

    Neue Urbane Produktion: ein Wegweiser für das Bergische Städtedreieck

    Get PDF
    Regional products are in vogue. Creative manufactories, open workshops, and modern production methods are bringing about a renaissance of craftsmanship in the city. What is actually new about this? And why does it hold such great potential for sustainable prosperity and livable neighborhoods? For almost three years, a project team from Utopiastadt, the Wuppertal Institut, and transzent researched, supported, and connected the pioneers of a new productivity in the region. Now it is time to take stock and to look ahead, to where the visions of a livable and productive city of tomorrow are taking shape on the horizon. This guide is the essence of three years of research, practice, and dialogue. It points a new direction for the region and the actors shaping it. Whether business development, city administration, civil society, the start-up scene, companies, or academia: we invite everyone to take this path together.

    Interaktive 3D-Rekonstruktion

    No full text
    Applicable image-based reconstruction of three-dimensional (3D) objects offers many interesting industrial as well as private use cases, such as augmented reality, reverse engineering, 3D printing and simulation tasks. Unfortunately, image-based 3D reconstruction is not yet applicable to these quite complex tasks, since the resulting 3D models are single, monolithic objects without any division into logical or functional subparts. This thesis aims at making image-based 3D reconstruction feasible such that captures of standard cameras can be used for creating functional 3D models. The research presented in the following does not focus on the fine-tuning of algorithms to achieve minor improvements, but evaluates the entire processing pipeline of image-based 3D reconstruction and tries to contribute at four critical points, where significant improvement can be achieved by advanced human-computer interaction: (i) As the starting point of any 3D reconstruction process, the object of interest (OOI) that should be reconstructed needs to be annotated. For this task, novel pixel-accurate OOI annotation as an interactive process is presented, and an appropriate software solution is released. (ii) To improve the interactive annotation process, traditional interface devices, like mouse and keyboard, are supplemented with human sensory data to achieve closer user interaction. (iii) In practice, a major obstacle is the so far missing standard for file formats for annotation, which leads to numerous proprietary solutions. Therefore, a uniform standard file format is implemented and used for prototyping the first gaze-improved computer vision algorithms. As a sideline of this research, analogies between the close interaction of humans and computer vision systems and 3D perception are identified and evaluated. 
(iv) Finally, to reduce the processing time of the underlying algorithms used for 3D reconstruction, the ability of artificial neural networks to reconstruct 3D models of unknown OOIs is investigated. In summary, the improvements achieved show that applicable image-based 3D reconstruction is within reach, but currently only feasible with the support of human-computer interaction. Two software solutions are implemented, one for visual video analytics and one for spare-part reconstruction. In the future, automated 3D reconstruction that produces functional 3D models can only be reached when algorithms become capable of acquiring semantic knowledge. Until then, the world knowledge provided to the 3D reconstruction pipeline by human-computer interaction is indispensable.

    Structure from Neuronal Networks (SfN²)

    No full text

    Faroe Islands rephotography image registration dataset

    No full text
    Over 200 georeferenced, registered rephotographic compilations of the Faroe Islands are provided in this dataset. The position of each compilation is georeferenced and thus locatable on a map. Each compilation consists of a historical and a corresponding contemporary image showing the same scene. Using stable object features, the two images of each geolocation are aligned with pixel accuracy. In the summer of 2022, all contemporary images were photographed by A. Schaffland, while the historical images were retrieved from the collections of the National Museum of Denmark. The images show Faroese landscape and cultural heritage sites, focusing on areas that were relevant when the historical images were taken, e.g., Kirkjubøur, Tórshavn, and Saksun. The historical images date from the end of the 19th century to the middle of the 20th century and were taken by scientists, surveyors, archaeologists, and painters. All historical images are in the public domain, have no known rights, or are shared under a CC license. The contemporary images by A. Schaffland are released under CC BY-NC-SA 4.0. The dataset is organized as a GIS project. Historical images that were not already georeferenced were located using street-view services. All historical images were added to the GIS database, which contains the camera position, viewing direction, etc. Each compilation can be displayed as an arrow from the camera position along the viewing direction on a map. Contemporary images were registered to historical images using a specialized tool. For some historical images, no rephotograph, or only a suboptimal one, could be taken; these historical images are nevertheless included in the database together with all other original images, providing additional data for improvements in rephotography methods in the coming years. The resulting image pairs can be used in image registration, landscape change, urban development, and cultural heritage research. Furthermore, the database can be used for public engagement in heritage and as a benchmark for further rephotography and time-series projects.
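
    As a hedged illustration of how each compilation could be drawn as a map arrow from the camera position along the viewing direction (the dataset's actual GIS tooling is not specified; the flat-earth approximation and the 100 m default arrow length are assumptions for illustration):

    ```python
    import math

    def arrow_endpoint(lat: float, lon: float, bearing_deg: float,
                       length_m: float = 100.0):
        """Approximate the endpoint of a map arrow starting at the camera
        position (lat, lon) and pointing along the viewing direction
        (bearing in degrees, 0 = north, 90 = east). Uses a flat-earth
        approximation, which is fine for arrows of a few hundred meters."""
        lat_rad = math.radians(lat)
        brg_rad = math.radians(bearing_deg)
        # ~111,320 m per degree of latitude; longitude shrinks with cos(lat).
        dlat = length_m * math.cos(brg_rad) / 111_320.0
        dlon = length_m * math.sin(brg_rad) / (111_320.0 * math.cos(lat_rad))
        return lat + dlat, lon + dlon
    ```

    With the camera position and bearing stored per record, a GIS layer can render these two points as a line feature for every compilation.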

    AI-in-the-Loop -- The impact of HMI in AI-based Application

    No full text
    Artificial intelligence (AI) and human-machine interaction (HMI) are two keywords that usually do not fit embedded applications. Among the steps needed before applying AI to solve a specific task, HMI is usually missing during the design of the AI architecture and the training of the AI model. The human-in-the-loop concept is prevalent in all other steps of AI development, from data analysis via data selection and cleaning to performance evaluation. During AI architecture design, HMI can immediately highlight unproductive layers of the architecture, so that lightweight network architectures for embedded applications can be created easily. We show that with this HMI, users can instantly decide which AI architecture should be trained and evaluated first, since a high accuracy on the task can be expected. This approach reduces the resources needed for AI development by avoiding the training and evaluation of AI architectures with unproductive layers, and it leads to lightweight AI architectures. These lightweight AI architectures, in turn, enable HMI while the AI runs on an edge device. By enabling HMI during inference, we introduce the AI-in-the-loop concept, which combines the strengths of AI and humans. In our AI-in-the-loop approach, the AI remains the workhorse and primarily solves the task; if the AI is unsure whether its inference solves the task correctly, it asks the user via an appropriate HMI. Consequently, AI will soon become available in many more applications, since HMI makes it more reliable and explainable.
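
    The abstract does not include code; the following sketch illustrates the core AI-in-the-loop decision rule under assumed names (`ai_in_the_loop`, a 0.8 confidence threshold, and an `ask_user` callback standing in for the HMI):

    ```python
    import math

    def softmax(logits):
        """Convert raw model outputs into probabilities."""
        peak = max(logits)
        exps = [math.exp(v - peak) for v in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def ai_in_the_loop(logits, labels, threshold=0.8, ask_user=None):
        """The AI remains the workhorse: it decides on its own when it is
        confident, and defers to the human via the HMI when it is unsure."""
        probs = softmax(logits)
        best = max(range(len(probs)), key=probs.__getitem__)
        if probs[best] >= threshold:
            return labels[best]           # confident: the AI decides
        return ask_user(labels, probs)    # unsure: ask the user via HMI
    ```

    In a deployed system the `ask_user` callback would drive an on-device dialog; the threshold trades off autonomy against how often the human is consulted.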

    Safe and Trustful AI for Closed-Loop Control Systems

    No full text
    In modern times, closed-loop control systems (CLCSs) play a prominent role in a wide range of applications, from production machinery via automated vehicles to robots. CLCSs actively manipulate the actual values of a process to match predetermined setpoints, typically in real time and with remarkable precision. However, the development, modeling, tuning, and optimization of CLCSs barely exploit the potential of artificial intelligence (AI). This paper explores novel opportunities and research directions in CLCS engineering, presenting potential designs and methodologies that incorporate AI. Taken together, these opportunities and directions make it evident that employing AI in the development and implementation of CLCSs is indeed feasible. Integrating AI into CLCS development, or directly within CLCSs, can lead to a significant improvement in stakeholder confidence. It also raises a question: how can AI in CLCSs be trusted so that its promising capabilities can be used safely? AI in CLCSs is currently not trusted because its extensive set of parameters defies complete testing, making its behavior effectively unknowable. Consequently, developers working on AI-based CLCSs must be able to rate the impact of the trainable parameters on the system accurately. Following this path, the paper highlights two key aspects as essential research directions towards safe AI-based CLCSs: (I) the identification and elimination of unproductive layers in artificial neural networks (ANNs), reducing the number of trainable parameters without influencing the overall outcome, and (II) the utilization of the solution space of an ANN to define the safety-critical scenarios of an AI-based CLCS.
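
    Research direction (I) can be illustrated with a toy sketch; the criterion used here (all weights of a layer near zero) is a deliberately simplified stand-in for whatever analysis the authors envision, and the layer names are invented:

    ```python
    def find_unproductive(layers, tol=1e-3):
        """Flag layers whose weights are all near zero. Under this
        simplifying assumption, such layers barely change the signal and
        are candidates for elimination, shrinking the set of trainable
        parameters that must be rated for safety.

        layers: mapping of layer name -> weight matrix (list of rows)."""
        flagged = []
        for name, weights in layers.items():
            if all(abs(w) < tol for row in weights for w in row):
                flagged.append(name)
        return flagged
    ```

    Eliminating flagged layers directly reduces the parameter count that complete testing would otherwise have to cover, which is the link to trust that the paper draws.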

    Feeling Hungry: Association of Dietary Patterns with Food Choices using Scene Perception

    No full text
    Studies on nutrition have historically concentrated on food shortages and over-nutrition. The physiological states of feeling hungry or being satiated, and their dynamics in food choices, dietary patterns, and nutritional behavior, have not been the focus of many studies. Currently, visual analytics with easy-to-use tooling is applicable in a wide range of disciplines. In this interdisciplinary pilot study, we tested a novel visual analytics software to assess dietary patterns and food choices for a greater understanding of nutritional behavior when hungry and when satiated. We developed a software toolchain and tested the hypotheses that there is no difference between visual search patterns of dishes (1) when hungry and when satiated and (2) between vegetarians and non-vegetarians. Results indicate that food choices can deviate from dietary patterns but correlate slightly with dish-gazing. Further, scene perception probably varies between being hungry and being satiated. Understanding the complicated relationship between scene perception and nutritional behavioral patterns, and scaling up this pilot study to a full study using the software approaches introduced here, is indispensable.